26 research outputs found

    A Bayesian Hierarchical Model for Comparative Evaluation of Teaching Quality Indicators in Higher Education

    The problem motivating the paper is the quantification of students' preferences regarding teaching/coursework quality, under certain numerical restrictions, in order to build a model for identifying, assessing and monitoring the major components of overall academic quality. After reviewing the strengths and limitations of conjoint analysis and of the random coefficient regression model used in similar problems in the past, we propose a Bayesian beta regression model with a Dirichlet prior on the model coefficients. This approach not only allows for the incorporation of informative priors when they are available but also provides user-friendly interfaces and direct probability interpretations for all quantities. Furthermore, it is a natural way to implement the usual constraints on the model weights/coefficients. The model was applied to data collected in 2009 and 2013 from undergraduate students at Panteion University, Athens, Greece; besides the construction of an instrument for the assessment and monitoring of teaching quality, it provided input for a preliminary discussion on the association between the differences in students' preferences across the two time periods and the current Greek economic and financial crisis.
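
    The abstract describes a beta regression whose coefficients carry a Dirichlet prior, so the weights are non-negative and sum to one. A minimal sketch of such a model follows, assuming the quality indicators in X are rescaled to [0, 1] so that a convex combination of them stays in the unit interval; the simulated data, the Gamma hyper-prior on the precision, and the use of PyMC are illustrative assumptions, not the authors' implementation.

    # Illustrative sketch only: a beta regression whose mean is a convex
    # combination of quality indicators, with a Dirichlet prior enforcing
    # the non-negativity / sum-to-one constraint on the weights.
    import numpy as np
    import pymc as pm

    rng = np.random.default_rng(0)
    n, K = 200, 4                         # hypothetical: n students, K quality indicators
    X = rng.uniform(size=(n, K))          # indicator scores rescaled to [0, 1]
    true_w = np.array([0.4, 0.3, 0.2, 0.1])
    mu_true = X @ true_w
    y_obs = rng.beta(mu_true * 30, (1 - mu_true) * 30)   # overall quality ratings in (0, 1)

    with pm.Model():
        w = pm.Dirichlet("w", a=np.ones(K))            # weights: w_k >= 0, sum_k w_k = 1
        phi = pm.Gamma("phi", alpha=2.0, beta=0.1)     # precision of the beta likelihood
        mu = pm.math.dot(X, w)                         # mean response, stays in (0, 1)
        pm.Beta("y", alpha=mu * phi, beta=(1 - mu) * phi, observed=y_obs)
        idata = pm.sample(1000, tune=1000, chains=2, random_seed=0)

    print(idata.posterior["w"].mean(dim=("chain", "draw")).values)

    The Dirichlet prior is what makes the weight constraints automatic: every posterior draw of w is itself a valid set of component weights.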

    Power-Expected-Posterior Priors for Variable Selection in Gaussian Linear Models

    In the context of the expected-posterior prior (EPP) approach to Bayesian variable selection in linear models, we combine ideas from power-prior and unit-information-prior methodologies to simultaneously produce a minimally-informative prior and diminish the effect of training samples. The result is that in practice our power-expected-posterior (PEP) methodology is sufficiently insensitive to the size n* of the training sample, due to PEP's unit-information construction, that one may take n* equal to the full-data sample size n and dispense with training samples altogether. In this paper we focus on Gaussian linear models and develop our method under two different baseline prior choices: the independence Jeffreys (or reference) prior, yielding the J-PEP posterior, and the Zellner g-prior, leading to Z-PEP. We find that, under the reference baseline prior, the asymptotics of PEP Bayes factors are equivalent to those of Schwarz's BIC criterion, ensuring consistency of the PEP approach to model selection. We compare the performance of our method, in simulation studies and a real example involving prediction of air-pollutant concentrations from meteorological covariates, with that of a variety of previously defined variants of Bayes factors for objective variable selection. Our prior, due to its unit-information structure, leads to a variable-selection procedure that (1) is systematically more parsimonious than the basic EPP with minimal training sample, while sacrificing no desirable performance characteristics to achieve this parsimony; (2) is robust to the size of the training sample, thus enjoying the advantages described above arising from the avoidance of training samples altogether; and (3) identifies maximum-a-posteriori models that achieve good out-of-sample predictive performance.
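
    Since the abstract notes that PEP Bayes factors under the reference baseline are asymptotically equivalent to Schwarz's BIC, a minimal sketch of that BIC-style surrogate for exhaustive variable selection in a Gaussian linear model is given below; the simulated data and the bic helper are hypothetical, and this is the crude large-sample approximation, not the actual J-PEP or Z-PEP computation.

    # Sketch: score every subset of candidate covariates with BIC, the
    # large-sample surrogate to which the PEP Bayes factors are equivalent.
    from itertools import combinations
    import numpy as np

    rng = np.random.default_rng(1)
    n, p = 100, 5
    X = rng.normal(size=(n, p))
    y = 1.0 + 2.0 * X[:, 0] - 1.5 * X[:, 2] + rng.normal(scale=1.0, size=n)

    def bic(y, X_sub):
        """BIC of a Gaussian linear model with an intercept and the given columns."""
        design = np.column_stack([np.ones(len(y)), X_sub])
        beta, *_ = np.linalg.lstsq(design, y, rcond=None)
        rss = np.sum((y - design @ beta) ** 2)
        return len(y) * np.log(rss / len(y)) + design.shape[1] * np.log(len(y))

    scores = {}
    for size in range(p + 1):
        for subset in combinations(range(p), size):
            scores[subset] = bic(y, X[:, list(subset)])

    best = min(scores, key=scores.get)          # smallest BIC ~ highest approximate posterior probability
    print("MAP-style model under the BIC surrogate:", best)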

    Prior distributions for objective Bayesian analysis

    We provide a review of prior distributions for objective Bayesian analysis. We start by examining some foundational issues and then organize our exposition into priors for: i) estimation or prediction; ii) model selection; iii) high-dimensional models. With regard to i), we present some basic notions, and then move to more recent contributions on discrete parameter space, hierarchical models, nonparametric models, and penalizing complexity priors. Point ii) is the focus of this paper: it discusses principles for objective Bayesian model comparison, and singles out some major concepts for building priors, which are subsequently illustrated in some detail for the classic problem of variable selection in normal linear models. We also present some recent contributions in the area of objective priors on model space. With regard to point iii) we only provide a short summary of some default priors for high-dimensional models, a rapidly growing area of research.
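
    For the classic normal-linear variable-selection problem that the review uses as its running illustration, the null-based Bayes factor under Zellner's g-prior has a well-known closed form; the expression below is the standard one from the g-prior literature and is quoted here only to indicate the kind of prior construction the paper surveys (k_gamma is the number of covariates in model M_gamma and R_gamma^2 its coefficient of determination):

    \mathrm{BF}(M_\gamma : M_0) \;=\; (1+g)^{(n-1-k_\gamma)/2}\,\bigl[\,1 + g\,(1 - R_\gamma^2)\,\bigr]^{-(n-1)/2}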

    Stochastic optimisation methods for cost-effective quality assessment in health

    SIGLE. Available from British Library Document Supply Centre - DSC:DXN041426 / BLDSC - British Library Document Supply Centre. GB: United Kingdom.

    Power-expected-posterior priors for generalized linear models

    The power-expected-posterior (PEP) prior provides an objective, automatic, consistent and parsimonious model selection procedure. At the same time it resolves the conceptual and computational problems due to the use of imaginary data. Namely, (i) it dispenses with the need to select and average across all possible minimal imaginary samples, and (ii) it diminishes the effect that the imaginary data have upon the posterior distribution. These attributes allow for large-sample approximations, when needed, in order to reduce the computational burden under more complex models. In this work we generalize the applicability of the PEP methodology, focusing on the framework of generalized linear models (GLMs), by introducing two new PEP definitions which are in effect applicable to any general model setting. Hyper-prior extensions for the power parameter that regulates the contribution of the imaginary data are introduced. We further study the validity of the predictive matching and of the model selection consistency, providing analytical proofs for the former and empirical evidence supporting the latter. For estimation of posterior model and inclusion probabilities we introduce a tuning-free Gibbs-based variable selection sampler. Several simulation scenarios and one real-life example are considered in order to evaluate the performance of the proposed methods compared to other commonly used approaches based on mixtures of g-priors. Results indicate that the GLM-PEP priors are more effective in the identification of sparse and parsimonious model formulations.
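
    A minimal sketch of a Gibbs-style sampler over inclusion indicators, of the general kind the abstract mentions, is given below; it scores each candidate logistic-regression model with a BIC-type surrogate (maximized log-likelihood minus a dimension penalty) rather than the actual GLM-PEP marginal likelihood, so the data, the log_score helper, and the flat prior over inclusion indicators are all illustrative assumptions.

    # Sketch: Gibbs sampling over inclusion indicators gamma_1..gamma_p for a
    # logistic regression, using a BIC-type surrogate in place of the GLM-PEP
    # marginal likelihood of each candidate model.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(2)
    n, p = 300, 4
    X = rng.normal(size=(n, p))
    logits = 0.5 + 1.5 * X[:, 0] - 1.0 * X[:, 3]
    y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logits)))

    def log_score(gamma):
        """BIC-style approximate log marginal likelihood of the logistic model
        with an intercept plus the covariates flagged in gamma."""
        cols = np.flatnonzero(gamma)
        design = sm.add_constant(X[:, cols]) if cols.size else np.ones((n, 1))
        fit = sm.Logit(y, design).fit(disp=0)
        return fit.llf - 0.5 * design.shape[1] * np.log(n)

    gamma = np.zeros(p, dtype=int)
    draws = []
    for _ in range(200):                              # full Gibbs sweeps
        for j in range(p):
            g1, g0 = gamma.copy(), gamma.copy()
            g1[j], g0[j] = 1, 0
            log_odds = log_score(g1) - log_score(g0)  # flat prior on gamma_j
            prob_in = 1.0 / (1.0 + np.exp(-np.clip(log_odds, -30, 30)))
            gamma[j] = rng.binomial(1, prob_in)
        draws.append(gamma.copy())

    print("posterior inclusion probabilities:", np.mean(draws, axis=0))

    Averaging the sampled indicator vectors gives the posterior inclusion probabilities used to rank covariates.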

    A Case Study of Stochastic Optimization in Health Policy: Problem Formulation and Preliminary Results

    We use Bayesian decision theory to address a variable selection problem arising in attempts to indirectly measure the quality of hospital care, by comparing observed mortality rates to expected values based on patient sickness at admission. Our method weighs data collection costs against predictive accuracy to find an optimal subset of the available admission sickness variables. The approach involves maximizing expected utility across possible subsets, using Monte Carlo methods based on random division of the available data into N modeling and validation splits to approximate the expectation. After exploring the geometry of the solution space, we compare a variety of stochastic optimization methods, including genetic algorithms (GA), simulated annealing (SA), tabu search (TS), threshold acceptance (TA), and messy simulated annealing (MSA), on their performance in finding good subsets of variables, and we clarify the role of N in the optimization. Preliminary results indicate that TS is somewhat better than TA and SA in this problem, with MSA and GA well behind the other three methods. Sensitivity analysis reveals broad stability of our conclusions.
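
    A minimal sketch of one of the simpler optimizers compared in the abstract, simulated annealing over binary inclusion vectors, is given below; the utility trades off validation mean squared error averaged over N random modeling/validation splits against a per-variable data-collection cost, and the simulated data, cost values, and cooling schedule are illustrative assumptions rather than the study's actual formulation.

    # Sketch: simulated annealing over subsets of candidate variables, maximizing a
    # Monte Carlo estimate of expected utility = -(validation MSE over N splits)
    # - total data-collection cost of the selected variables.
    import numpy as np

    rng = np.random.default_rng(3)
    n, p = 400, 8
    X = rng.normal(size=(n, p))
    y = 2.0 * X[:, 0] - 1.0 * X[:, 4] + 0.5 * X[:, 6] + rng.normal(size=n)
    cost = rng.uniform(0.05, 0.3, size=p)      # hypothetical per-variable collection cost
    N = 10                                     # modeling/validation splits per evaluation

    def utility(gamma):
        """Monte Carlo estimate of expected utility for the subset flagged in gamma."""
        cols = np.flatnonzero(gamma)
        mses = []
        for _ in range(N):
            idx = rng.permutation(n)
            train, valid = idx[: n // 2], idx[n // 2:]
            d_tr = np.column_stack([np.ones(train.size), X[np.ix_(train, cols)]])
            beta, *_ = np.linalg.lstsq(d_tr, y[train], rcond=None)
            d_va = np.column_stack([np.ones(valid.size), X[np.ix_(valid, cols)]])
            mses.append(np.mean((y[valid] - d_va @ beta) ** 2))
        return -np.mean(mses) - cost[cols].sum()

    gamma = rng.integers(0, 2, size=p)         # random starting subset
    cur_u = utility(gamma)
    best, best_u = gamma.copy(), cur_u
    temperature = 1.0
    for step in range(300):
        cand = gamma.copy()
        cand[rng.integers(p)] ^= 1             # flip one inclusion indicator
        cand_u = utility(cand)
        if cand_u > cur_u or rng.random() < np.exp((cand_u - cur_u) / temperature):
            gamma, cur_u = cand, cand_u
        if cur_u > best_u:
            best, best_u = gamma.copy(), cur_u
        temperature *= 0.99                    # geometric cooling

    print("best subset found:", np.flatnonzero(best), "with utility", round(best_u, 3))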